6 research outputs found

    COVIWD: COVID-19 Wikidata Dashboard

    COVID-19 (short for coronavirus disease 2019) is an emerging infectious disease that has had a tremendous impact on our daily lives. Globally, there have been over 95 million cases of COVID-19 and 2 million deaths across 191 countries and regions. The rapid spread and severity of COVID-19 call for a monitoring dashboard that can be developed quickly and adapted easily. Wikidata is a free, collaborative knowledge graph that collects structured data about various themes, including COVID-19. We present COVIWD, a COVID-19 Wikidata dashboard, which provides a one-stop information/visualization service for topics related to COVID-19, ranging from symptoms and risk factors to comparisons of cases and deaths across countries. The dashboard is one of the first to leverage open knowledge graph technologies, namely RDF (for data modeling) and SPARQL (for querying), to give a live, concise snapshot of the COVID-19 pandemic. The use of both RDF and SPARQL enables rapid and flexible application development. COVIWD is available at http://coviwd.org
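    The RDF/SPARQL combination described above can be sketched as follows. This is a minimal illustration, not COVIWD's actual code: it builds a tiny local RDF graph whose IRIs follow Wikidata's conventions (Q84263196 for COVID-19, P780 for "symptoms and signs"; the symptom IDs are illustrative) and then answers a SPARQL query over it, rather than calling the live Wikidata endpoint.

    ```python
    # Sketch: RDF for data modeling, SPARQL for querying, as in COVIWD.
    # The graph is a tiny local stand-in for Wikidata.
    from rdflib import Graph, Namespace

    WD = Namespace("http://www.wikidata.org/entity/")
    WDT = Namespace("http://www.wikidata.org/prop/direct/")

    g = Graph()
    covid = WD["Q84263196"]               # COVID-19
    g.add((covid, WDT["P780"], WD["Q38933"]))  # P780 = symptom; fever (illustrative ID)
    g.add((covid, WDT["P780"], WD["Q35805"]))  # cough (illustrative ID)

    # The dashboard's views are driven by SPARQL queries like this one
    query = """
    PREFIX wd:  <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT ?symptom WHERE { wd:Q84263196 wdt:P780 ?symptom . }
    """
    symptoms = {str(row.symptom) for row in g.query(query)}
    print(sorted(symptoms))
    ```

    Swapping the local graph for the public Wikidata SPARQL endpoint is what makes such a dashboard "live": the query stays the same, only the data source changes.
    
    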

    Cardinal Virtues: Extracting Relation Cardinalities from Text

    Full text link
    Information extraction (IE) from text has largely focused on relations between individual entities, such as who has won which award. However, some facts are never fully mentioned, and no IE method has perfect recall. Thus, it is beneficial to also tap content about the cardinalities of these relations, for example, how many awards someone has won. We introduce this novel problem of extracting cardinalities and discuss the specific challenges that set it apart from standard IE. We present a distant supervision method using conditional random fields. A preliminary evaluation results in precision between 3% and 55%, depending on the difficulty of relations. Comment: 5 pages, ACL 2017 (short paper)
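    The task above can be illustrated with a deliberately simple stand-in. The paper's method is a CRF trained with distant supervision; the regex extractor below is only a toy that shows what the output of cardinality extraction looks like, i.e., a (relation noun, count) pair rather than an entity pair. The number-word list and noun patterns are assumptions for the example.

    ```python
    # Toy relation-cardinality extractor (illustrative only; the paper
    # uses a distantly supervised CRF, not regular expressions).
    import re

    WORD2NUM = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
    PATTERN = re.compile(
        r"\b(\d+|one|two|three|four|five)\s+(awards?|children|novels?)\b",
        re.IGNORECASE,
    )

    def extract_cardinality(sentence):
        """Return (relation_noun, count) for the first match, else None."""
        m = PATTERN.search(sentence)
        if not m:
            return None
        num, noun = m.group(1).lower(), m.group(2).lower()
        count = int(num) if num.isdigit() else WORD2NUM[num]
        return noun, count

    print(extract_cardinality("She has won three awards so far."))  # ('awards', 3)
    print(extract_cardinality("He wrote 5 novels."))                # ('novels', 5)
    ```

    The hard part the paper addresses, and this toy ignores, is deciding whether a number in context actually counts instances of a relation (compositionality, vague quantifiers, distant-supervision noise), which is why the reported precision varies so widely across relations.
    
    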

    LINKEDLAB: A DATA MANAGEMENT PLATFORM FOR RESEARCH COMMUNITIES USING LINKED DATA APPROACH

    Data management plays a key role in how we access, organize, and integrate data. The research community is one domain in which data is disseminated, e.g., data about projects, publications, and members. There is no well-established standard for doing so, and therefore the value of the data decreases, e.g., in terms of accessibility, discoverability, and reusability. LinkedLab proposes a platform for managing research-community data using the Linked Data approach. The use of Linked Data affords a more effective way to access, organize, and integrate the data.
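    The Linked Data approach described above can be sketched briefly. The abstract does not specify LinkedLab's schema, so the example below uses two standard vocabularies (FOAF for people, Dublin Core for publications) and an invented `http://example.org/linkedlab/` namespace to show how community data becomes reusable RDF.

    ```python
    # Sketch: research-community data as Linked Data (illustrative
    # vocabulary choices; LinkedLab's actual schema is not given).
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF, DCTERMS, RDF

    EX = Namespace("http://example.org/linkedlab/")

    g = Graph()
    member = EX["alice"]
    paper = EX["pub1"]
    g.add((member, RDF.type, FOAF.Person))
    g.add((member, FOAF.name, Literal("Alice")))
    g.add((paper, DCTERMS.creator, member))
    g.add((paper, DCTERMS.title, Literal("A Linked Data Platform")))

    # Serializing to Turtle makes the data consumable by any RDF tool
    ttl = g.serialize(format="turtle")
    print(ttl)
    ```

    Because members, projects, and publications are identified by IRIs and described with shared vocabularies, other communities' tools can discover and integrate the data without a bespoke export format, which is the accessibility/reusability gain the abstract refers to.
    
    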

    Managing and Consuming Completeness Information for RDF Data Sources

    The ever-increasing amount of Semantic Web data gives rise to the question: How complete is the data? Though data on the Semantic Web is generally incomplete, many parts of the data are indeed complete, such as the children of Barack Obama and the crew of Apollo 11. This thesis studies how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques for completeness reasoning and conduct experimental evaluations to show the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query completeness checking. We further enrich completeness information with timestamps, enabling query answers to be checked for the time up to which they are complete. We then introduce two demonstrators, i.e., CORNER and COOL-WD, to show how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation cardinality extraction.
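    The core idea of consuming completeness information can be illustrated with a heavily simplified sketch. Here a completeness statement is reduced to a (subject, property) pair declared complete, and a query's answer is guaranteed complete only if every pair it touches is covered; the thesis's actual reasoning over conjunctive queries is richer, and the names below are assumptions for the example.

    ```python
    # Simplified completeness check: a query over only "complete"
    # (subject, property) pairs has guaranteed-complete answers.
    complete = {
        ("Barack_Obama", "child"),     # children of Obama: complete
        ("Apollo_11", "crew_member"),  # crew of Apollo 11: complete
    }

    def query_is_complete(query_pairs, statements):
        """True iff every (subject, property) pair the query touches
        is covered by a completeness statement."""
        return all(pair in statements for pair in query_pairs)

    # Both parts of this query are covered, so its answer is complete
    print(query_is_complete({("Barack_Obama", "child"),
                             ("Apollo_11", "crew_member")}, complete))  # True
    # No statement covers Obama's spouse, so no guarantee is possible
    print(query_is_complete({("Barack_Obama", "spouse")}, complete))    # False
    ```

    The timestamp extension mentioned in the abstract would attach a validity time to each statement, so the check returns not just a yes/no but the time up to which the answer is known complete.
    
    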

    Discovering Implicational Knowledge in Wikidata

    Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Examples include the proprietary knowledge graphs of companies such as Google, Facebook, IBM, or Microsoft, but also freely available ones such as YAGO, DBpedia, and Wikidata. A distinguishing feature of Wikidata is that the knowledge is collaboratively edited and curated. While this greatly enhances the scope of Wikidata, it also makes it impossible for a single individual to grasp complex connections between properties or understand the global impact of edits in the graph. We apply Formal Concept Analysis (FCA) to efficiently identify comprehensible implications that are implicitly present in the data. Although the complex structure of data modelling in Wikidata is not amenable to a direct approach, we overcome this limitation by extracting contextual representations of parts of Wikidata in a systematic fashion. We demonstrate the practical feasibility of our approach through several experiments and show that the results may lead to the discovery of interesting implicational knowledge. Besides providing a method for obtaining large real-world data sets for FCA, we sketch potential applications in offering semantic assistance for editing and curating Wikidata.
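    The kind of implicational knowledge mined here can be shown on a toy formal context. In FCA terms, an implication A → b holds when every object possessing all attributes in A also possesses b; for Wikidata, objects are items and attributes are the properties they use. The three-item context below is invented for illustration.

    ```python
    # Toy FCA-style implication check over an invented context:
    # which items (objects) use which properties (attributes).
    context = {
        "item1": {"date_of_birth", "place_of_birth", "occupation"},
        "item2": {"date_of_birth", "place_of_birth"},
        "item3": {"date_of_birth", "occupation"},
    }

    def implies(premise, conclusion, ctx):
        """premise -> conclusion holds iff every object with all
        premise attributes also has the conclusion attribute."""
        return all(conclusion in attrs
                   for attrs in ctx.values()
                   if premise <= attrs)

    # Every item using place_of_birth also uses date_of_birth
    print(implies({"place_of_birth"}, "date_of_birth", context))  # True
    # But date_of_birth does not imply occupation (item2 lacks it)
    print(implies({"date_of_birth"}, "occupation", context))      # False
    ```

    An implication like "place_of_birth → date_of_birth" is exactly the sort of comprehensible, data-derived rule that could power editing assistance: an editor adding one property can be prompted about the other.
    
    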